CS 15-859: Algorithms for Big Data, Fall 2017 — Lecture 10, Nov
Abstract
Recall that INDEX has the following lower bound on its randomized δ-error communication complexity: CC_δ(INDEX) ≥ I(M; X | R) ≥ n(1 − H(δ)), where M is Alice's message, R is the shared public random string, and H is the binary entropy function. Conditioning on R is needed here because our earlier Gap-Hamming lower bound was a reduction from INDEX that itself used the shared random string R.

Definition. Given (X, Y) ∼ μ, the μ-distributional communication complexity of a function f(X, Y) over the distribution μ, denoted D_μ(f), is the minimum cost of a deterministic protocol that outputs the correct answer with probability at least 2/3 when the inputs are drawn from μ.
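As a quick numerical sanity check on the bound n(1 − H(δ)), the sketch below (helper names are our own, not from the notes) evaluates the binary entropy and the resulting lower bound; note that even for the fairly large error δ = 1/3 the bound remains a constant fraction of n.

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits, with H(0) = H(1) = 0."""
    if p == 0 or p == 1:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def index_lower_bound(n, delta):
    """The lower bound n(1 - H(delta)) on CC_delta(INDEX)."""
    return n * (1 - binary_entropy(delta))

# For delta = 1/3 the coefficient 1 - H(1/3) is about 0.082,
# so the communication must still be Omega(n) bits.
print(index_lower_bound(1000, 1/3))  # ~81.7 bits for n = 1000
```

As δ → 1/2 the bound degrades to 0, matching the fact that a protocol may simply guess when its allowed error reaches 1/2.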